Autonomous Vehicle Development in Democracy

Addressing the Risks of Self-Driving Cars while Supporting a Fertile Environment for Innovation

Introduction

Technology is not developed in a vacuum: scientists, engineers, business professionals and politicians drive the creation and distribution of innovations, often in the face of serious potential risk. Government can, and does, play a powerful role in this process. From innovative taxation policies to funding and regulation, government can both accelerate and hinder advancement. In contrast to Europe, the United States is approaching the regulation of self-driving cars in the same manner that it has approached automobile regulation since the advent of the vehicle: it encourages private organizations to build their own safety regimes, allows municipalities to create a federated network of regulations based on their experience with the technology, and then, if deemed necessary, steps in to standardize and enforce those regulations. This 'laissez-faire' approach holds because the private safety regimes used in testing and early deployment have largely worked. With the exception of the high-profile pedestrian fatality caused by a self-driving vehicle in 2018, real-world deployment (although limited in scale) has so far proven significantly safer than human drivers, and there has been little public pressure to regulate self-driving cars. The United States avoids hindering the technology by allowing the private organizations closest to its intricacies to craft their own safety policies in collaboration with their insurers.

Yet as the technology deploys more widely, accidents will happen with greater frequency, and challenging choices balancing the benefits and detriments of autonomous vehicles will have to be made. Just as with the rise of pedestrian deaths in the 1920s, US regulation will develop in response to real-world problems that the private sector fails to address adequately. There will be pitfalls, missteps, failed investments and malicious uses of advanced technologies along the way. Politicians and safety regulators must learn the intricacies of this technology now, so that they are not ill-equipped to fulfill their role when the time comes.

Automobile Safety Statistics

Self-driving cars (also known as autonomous vehicles) are not simply a technology showpiece; they address serious challenges in modern life. According to the Association for Safe International Road Travel, approximately 1.35 million people die in car crashes each year, and between twenty and fifty million more suffer serious injuries, including long-term disabilities. Globally, vehicle crashes are the leading cause of death for people between the ages of 5 and 29. Despite the rhetoric about America's 'crumbling infrastructure', the National Highway Traffic Safety Administration (NHTSA) notes that 94% of crashes are due to human error. The United States boasts a developed road infrastructure, yet still averages roughly 36,000 deaths from crashes per year. 2019 saw the highest number of pedestrian deaths in thirty years, with 6,590 fatalities. Economically, the NHTSA estimates a $240B annual loss due to death or decreased quality of life, $57.6B due to lost workplace productivity, and $242B in lost economic activity due to traffic. Redwood Logistics, a shipping consultancy with investments in the technology, estimates that transitioning the trucking industry to operate autonomously, primarily on nights and weekends, could reduce traffic congestion by 75%. AARP, an advocacy and interest group for individuals over fifty, notes that 49 million elderly and 53 million disabled people lack convenient and reliable transportation. Self-driving vehicles could unlock opportunity for these groups, not to mention minors facing the same issues. The safety and economic benefits of unlocking the full value of self-driving cars make it a compelling and important technology to advance while regulating it correctly.

Autonomous Vehicle Levels and Standards

Although self-driving cars are currently deployed only in limited circumstances (as of 2021), they are actively addressing these safety challenges. There are six levels of driving automation, as defined by the Society of Automotive Engineers (SAE).

  • Level 0 (No Automation): A regular vehicle with no automated features.
  • Level 1 (Driver Assistance): The system can take over partially in limited circumstances, such as adaptive cruise control or automated parallel parking.
  • Level 2 (Partial Automation): Combines several driver-assistance functions, although the driver must be able to take over the vehicle at any time.
  • Level 3 (Conditional Automation): The driver can disengage while the vehicle operates autonomously, but only in certain circumstances, and must be ready to retake control when prompted.
  • Level 4 (High Automation): The vehicle operates itself in most conditions.
  • Level 5 (Full Automation): The vehicle operates itself in all conditions, anywhere.

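To make the distinctions between levels concrete, the taxonomy can be represented as a simple data structure. The sketch below is illustrative only; the enum names and the supervision helper are assumed shorthand for this discussion, not terminology from the SAE standard itself.

    from enum import IntEnum

    class SAELevel(IntEnum):
        """Illustrative encoding of the six SAE driving-automation levels."""
        NO_AUTOMATION = 0           # Level 0: regular vehicle, no automated features
        DRIVER_ASSISTANCE = 1       # Level 1: e.g., adaptive cruise control, auto parking
        PARTIAL_AUTOMATION = 2      # Level 2: combined assistance; driver stays engaged
        CONDITIONAL_AUTOMATION = 3  # Level 3: drives itself in certain circumstances only
        HIGH_AUTOMATION = 4         # Level 4: operates in most conditions
        FULL_AUTOMATION = 5         # Level 5: operates in all conditions, anywhere

    def driver_must_supervise(level: SAELevel) -> bool:
        """At Levels 0-2 a human driver is still responsible for monitoring the road."""
        return level <= SAELevel.PARTIAL_AUTOMATION

    for level in SAELevel:
        print(f"Level {int(level)} ({level.name}): supervision required = {driver_must_supervise(level)}")

The key boundary for regulators sits between Levels 2 and 3, where responsibility for monitoring the road begins to shift from the driver to the system.
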
On a closed course with well-maintained streets and other modern, sensor-connected vehicles, the technology works nearly flawlessly. In the limited deployments that several companies have run in certain cities, it has also performed admirably. The challenge in developing self-driving cars is addressing the effectively infinite number of potential real-world events. For example, how will an autonomous vehicle react in sleet while navigating a pothole, faded lane markings, vehicles on both sides and a pedestrian crossing? Equipped with advanced sensors capable of ingesting massive amounts of data, autonomous vehicles can perceive threats and obstacles, make decisions and take action far faster than a human. Yet replicating the final component of driving, human intuition, is extremely challenging; the technology may be 99% of the way there, but that last stretch remains elusive. How does a government regulate autonomous vehicles with such potential without hindering development?

Current Development

The only way to achieve Level 4 and Level 5 self-driving cars is to constantly test and improve algorithms through real-world experience. This is a time-consuming stage and requires interaction between self-driving cars and the public. One of the most effective ways AV companies have found to advance their technology is by focusing on highway travel and shipping. Highways are managed by state authorities with additional federal funding; they are well maintained and connect the major population centers across the nation. Daimler, Waymo (Google's self-driving car spinoff) and Amazon all maintain a strong focus on highway travel. By testing and improving the technology in a more controlled environment than dense city traffic, AV companies can gradually improve the public's perception of the technology while simultaneously improving the efficiency of the shipping industry. By showcasing the technology's potential in a way that keeps the public safe and reduces traffic congestion, AV companies simplify the task at hand. The United States has a rich history of developing regulations around new technologies gradually, and by steadily reducing the perceived risk profile of self-driving cars, AV companies can continue to operate in a regulatory grey zone and develop the industry, all while keeping the public safe.

US History of Regulations

Lee Vinsel's book 'Moving Violations' follows the development of safety culture and regulation from the advent of the automobile. Initially, automobile regulation was unnecessary, as there were only a handful of vehicles on the road. Yet they were prone to malfunction, so car enthusiasts created automobile clubs to address the first wave of safety incidents, mainly from the driver's perspective. Someone who bought a vehicle bought a shell with a seat, a frame, four tires and an engine; everything else, including rubber for the pedals, a windshield, doors and headlights, had to be purchased afterward. The government allowed automobile clubs to bring order to the chaos and work with local municipalities to create and enforce rules of the road. Even issues such as which side of the road to drive on were treated informally until more cars were manufactured. Automobile clubs grew nationally, worked with municipalities to standardize traffic conditions, and eventually coalesced into quasi-governmental bodies such as the Society of Automotive Engineers and the National Safety Council. As deaths from vehicle crashes rose from 8,200 in 1916 to 18,400 in 1923, the federal government began to streamline the process of mandating safety features in automobiles. Vinsel writes,

A wide variety of groups and individuals - automobile manufacturers, courts, city governments, insurance executives, professional societies, charitable organizations, churches, schools, independent experts, and of course, individual drivers - took some responsibility for managing the risks that attended the car.

Today, the federal government largely regulates the manufacture of vehicles to ensure they include required safety features, while states and local municipalities regulate their operation. America's long tradition of federalism and light-touch regulation continues as it confronts the complexities of autonomous vehicles: the federal government maintains a largely hands-off approach, stepping in only when events call for it.

Perception of Technology Drives Automotive Regulation

The United States has a unique history of allowing automotive regulation to develop based upon the public's perception of risk. The federal government has, for the most part, stepped in only when private entities requested it or were unable to adequately address safety issues. That legacy continues with autonomous vehicles (AVs). Partners for Automated Vehicle Education (PAVE), a research and advocacy group dedicated to informing the public about the benefits of self-driving vehicles, notes that the public remains wary of the technology: 48% of its respondents said they did not trust the technology and would not use a self-driving car if offered one. Yet 60% of respondents said they would trust the technology more if they were better informed about it, and 58% of respondents who accepted a test ride in an AV changed their minds about the technology. When asked about the high-profile Tesla and Uber crashes that ended in fatalities, only 7% could recall basic details. PAVE notes that perception of AVs is driven largely by a lack of real-world exposure (which leads to erroneous assumptions) and by entertainment (e.g., sci-fi movies). While the high-profile incidents have a profound effect on those who know about them, that is a small proportion of society, and public outcry has not yet risen to a level that demands immediate, across-the-board government intervention. AV companies are committed to keeping it that way by focusing internally on safety.

Uber Self-Driving Vehicle Incident

Self-driving cars, like virtually all new technologies, will unfortunately have setbacks. In 2018, a self-driving car testing new equipment killed a pedestrian. The vehicle, owned and operated by Uber's AV division, was traveling autonomously with a safety driver at approximately 45 miles per hour on a four-lane road in Tempe, Arizona at 10pm. Elaine Herzberg ran across the road while pushing a bicycle and was struck and killed by the vehicle. While the system detected Herzberg, it had difficulty classifying the object because of the combined shape of a bicycle and a human moving across the road at an unexpected place. By the time the system initiated a stop, 1.3 seconds before impact, there was not enough time left to avoid hitting her. The safety driver was alone, and interior cab video shows that throughout the trip she was intermittently watching television on her smartphone; she was ultimately charged by the state of Arizona. Uber shuttered its program and sold it to Aurora Innovation, and other companies ramped up their safety programs to ensure they would not be at the center of nationwide scrutiny. The incident nonetheless raised compelling questions. The sensors identified Herzberg several seconds before a human would have perceived her, but a chain of decisions by Uber and a failure to address certain edge cases created a 4.5-second delay in taking action. Aptiv, an AV sensor manufacturer, released a statement that Uber had disabled its advanced driver-assistance system before testing. Intel's Mobileye, another supplier of AV technology, recreated the incident and claimed that its system could have identified Herzberg with enough time to avoid the collision. Not to make light of the circumstances, but Herzberg was crossing the road illegally and was later found to have methamphetamine and marijuana in her system, which perhaps explains why she ran in front of a well-lit moving vehicle.
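
A rough back-of-the-envelope calculation shows why braking that began 1.3 seconds before impact could not prevent the collision. The sketch below uses the 45 mph speed and the 1.3-second figure from the account above; the 7 m/s² braking deceleration is an assumed value for hard braking on dry pavement, not a number from the investigation.

    # Back-of-the-envelope check: can a car that starts braking 1.3 s before impact stop in time?
    # 45 mph and 1.3 s come from the incident description; 7 m/s^2 is an assumed hard-braking rate.
    MPH_TO_MS = 0.44704

    speed = 45 * MPH_TO_MS            # ~20.1 m/s initial speed
    decel = 7.0                       # assumed braking deceleration, m/s^2
    time_available = 1.3              # seconds between brake initiation and impact

    time_to_stop = speed / decel                               # ~2.9 s required to reach a standstill
    stopping_distance = speed ** 2 / (2 * decel)               # ~29 m required to stop
    speed_at_impact = max(0.0, speed - decel * time_available) # ~11 m/s still remaining

    print(f"Time needed to stop: {time_to_stop:.1f} s (only {time_available} s available)")
    print(f"Distance needed to stop: {stopping_distance:.1f} m")
    print(f"Speed at impact: {speed_at_impact:.1f} m/s (~{speed_at_impact / MPH_TO_MS:.0f} mph)")

Under these assumptions the vehicle would still have been traveling at roughly 25 mph when it reached Herzberg, which is why the 4.5-second delay in classification and decision-making, rather than raw braking capability, dominated the outcome.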

Impact of Uber Incident

A long chain of failures led to the first pedestrian death caused by a self-driving vehicle:

  1. The safety driver was not paying attention
  2. Uber disabled certain sensors and safety features
  3. Herzberg crossed the road at night directly in front of a well-lit car
  4. The software algorithms had trouble identifying an object that was a combination of a pedestrian and a bike
  5. Lower-cost sensors did not interact well with software that had been designed for use with other sensors.

This avoidable death involved a degree of technical failure, but just like the 94% of crashes attributed to human error, there was a human-technology interaction at every step. The National Transportation Safety Board determined that the overall issue was an 'inadequate safety culture' at Uber, rather than a systemic failure of the technology. Just as with the death of Bridget Driscoll, the first pedestrian killed by an automobile, struck in 1896 by a joyriding enthusiast while walking in a London park, the media covered this event extensively. Several publications called for a ban on self-driving cars, while others framed it as only the beginning of a rise in such deaths. Yet over six thousand pedestrians are killed by vehicles in the United States every year; while tragic, Herzberg's death is rare and has not been repeated as of 2022.

In his article 'The Social Fabric at Risk: Toward the Social Transformation of Risk Analysis', James Short argues that 'potential benefits or positive aspects of risk tend to receive far less attention' than isolated events. In his view, the entire social fabric depends on understanding risk and making calculated decisions based on projected outcomes. Yet the term is associated almost solely with aversion and hazard. The first definition of 'risk' in the Merriam-Webster dictionary is 'possibility of loss or injury'; none of the available definitions mention that risk balances beneficial and negative outcomes. For self-driving cars, this kind of risk analysis is crucial if the intent is to improve on the abysmal safety record of human drivers. It leads to uncomfortable conversations, but governments, the self-driving car industry and safety advocates must make a clear case to society that although self-driving cars pose certain risks, the expected benefits outweigh the downsides. If AVs can give mobility to millions of our youth, elderly and disabled, transform the shipping industry and reduce crash fatalities, does this balance out the tragic incidents that will inevitably occur along the way?
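
Short's point about weighing benefits against harms can be made concrete with a toy expected-value comparison. The sketch below is purely illustrative: the 36,000 annual fatalities baseline echoes the US figure cited earlier, while the fleet-share and relative-risk parameters are hypothetical placeholders rather than measured values.

    # Toy expected-value comparison of annual US road fatalities with and without AV adoption.
    # The 36,000 baseline echoes the figure cited earlier; the other parameters are hypothetical.
    BASELINE_FATALITIES = 36_000

    def expected_fatalities(av_share_of_miles: float, av_relative_risk: float) -> float:
        """Expected annual deaths if AVs drive a share of all miles at a fatality rate
        expressed relative to human drivers (1.0 = identical to humans)."""
        human_share = 1.0 - av_share_of_miles
        return BASELINE_FATALITIES * (human_share + av_share_of_miles * av_relative_risk)

    # Hypothetical scenario: AVs drive 20% of miles at half the human fatality rate.
    scenario = expected_fatalities(av_share_of_miles=0.20, av_relative_risk=0.5)
    print(f"Baseline: {BASELINE_FATALITIES:,} deaths/year")
    print(f"Scenario: {scenario:,.0f} deaths/year ({BASELINE_FATALITIES - scenario:,.0f} fewer)")

Even this crude model illustrates the trade-off Short describes: a technology that causes some deaths can still yield a large net reduction, and whether society accepts that arithmetic is as much a political question as a technical one.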

The Precautionary Principle

Kenneth Foster published "Science and the Precautionary Principle" in Science in 2000, discussing the controversy around the principle, which emerged in the late 1970s as Europe sought to revamp its environmental policies. Saul Halfon defined it as a 'regulatory approach, under conditions of scientific uncertainty, requiring that a new chemical or technology be regulated or banned until it is proven safe'. While the principle has played a key role in European law, Foster argues that it dramatically hinders innovation and development and is used as a political and economic tool. The US Chamber of Commerce specifically addressed and rejected the precautionary principle in an August 2010 release: unlike the European model, it "supports a science-based approach to risk management, where risk is assessed based on scientifically sound and technically rigorous standards." Self-driving vehicles are incompatible with the precautionary principle, because it is impossible to prove they are safe without letting them interact with the real world. Software simulations only go so far; there must be freedom to develop these vehicles in the environment in which they will ultimately operate. European AV companies are also working on the problem, but Daimler-Mercedes, Volkswagen and BMW are all waiting for approval of self-driving features in Europe. Almost all leading European automotive manufacturers maintain a testing branch in the United States because of regulatory difficulties at home. The auto industry's European chief also declared that the stagnant regulatory environment at the United Nations Economic Commission for Europe (UNECE) was debilitating the industry, forcing producers to look elsewhere for expansion. Waymo CEO John Krafcik noted that "It's as if an Airbus A320 with 150 people on board was crashing every hour of every day all year long while Europe figures out how to address the issue." It remains to be seen which approach will win out; Great Britain and Germany have each created national initiatives that pair investment in self-driving vehicles with the Internet of Things, 5G and connected vehicles in anticipation of the rollout of Level 4 and Level 5 vehicles.

Government Regulation

The United States' approach to regulating self-driving cars resembles its approach to automobiles in the 1930s: the government has allowed private organizations and local municipalities to develop and design their own safety regimes, stepping in only on rare occasions when events created a social outcry. As of 2021, the federal government has very little regulation addressing self-driving cars. It delegates authority to the states to develop their own regulations and incentives. The National Conference of State Legislatures notes that by the end of 2018, 15 of 50 states had passed legislation specifically addressing self-driving vehicles. Such legislation both regulates and incentivizes AVs; New Mexico, for example, provides three hundred miles of highway for AV testing, education for law enforcement and first responders on dealing with AVs, and sensors that support operations. The Congressional Research Service's report 'Issues in Autonomous Vehicle Testing and Deployment' references several key issues that stand in the way of centralized regulation, including:

  1. Access to private data
  2. Difficulty regulating artificial intelligence, whose decision-making is not designed for human understanding
  3. The federalist approach to regulation
    1. Traditionally, the federal government is responsible for vehicle manufacturing and safety, while the states manage the operation of vehicles.
  4. Insurance
    1. Who is responsible in the event of an incident?

If the federal government intervened to create a single AV standard, it would contravene much of the regulation that states have already created. Additionally, certain states, such as Texas, New Mexico and Arizona, provide monetary incentives to AV companies and are more willing to accept the risk that comes with AVs operating on their roads; in exchange, they expect to be at the forefront of the benefits.
In response to the Uber incident, lawmakers on the Senate Commerce, Science, and Transportation Committee met to discuss the respective roles of regulation and technology. The contentious nature of the meeting is indicative of the challenge of balancing risk and innovation. Throughout, the National Transportation Safety Board (NTSB) and the National Highway Traffic Safety Administration battled over roles and responsibilities. The NTSB claimed NHTSA was not giving direction to the industry, and NTSB board member Jennifer Homendy declared, "In my opinion, they've [NHTSA] put technology advancement here before saving lives", cynically suggesting that NHTSA's public guidelines be renamed "The Vision for Lax Safety." James Owens, acting head of NHTSA, angrily retorted, "If we establish standards too quickly, we run the risk of stymieing innovation. So, we want to step back; we want to let the innovation occur, and the competition occur." While NHTSA suggests that AV companies file voluntary safety review documentation, only sixteen of eighty have done so, and of those sixteen the NTSB notes that the data within is "virtually unusable". Although the meeting was intended to set the stage for regulation, the differing views on the roles of regulation and innovation in the face of real-world risk show that this remains a politically fraught area.

Conclusion

The United States has a rich history of 'laissez-faire' governance that lasts until public pressure demands that an issue be addressed. The current light-touch approach has, for the most part, allowed the United States to lead the world in autonomous vehicle technology. Even without federal standards, the self-driving car industry has largely policed itself, establishing rigorous safety protocols with well-trained backup drivers and focusing on highway travel in a quest to bring its technology to fruition while avoiding the national spotlight of a deadly catastrophe such as the Uber incident. By upholding high internal safety standards, the AV industry avoids regulation that could stymie innovation and keeps the federal government from intervening in ways that could shape its future. The government, for its part, is counting on the risks taken in developing self-driving cars paving the way for dramatic benefits down the line. Yet there will undoubtedly be tragic incidents that result from bringing this innovation to a broader audience. Everyone has a role to play in ensuring this succeeds. Government officials should work to become knowledgeable about how the technology works and its pros and cons, and be able to articulate their views to the public. The AV industry must maintain high internal standards, continue to use every available safety precaution, deploy the technology sustainably and address the concerns of the public. Finally, the public should take the time to educate itself about the industry.